
    Improving Generative Model-based Unfolding with Schrödinger Bridges

    Machine learning-based unfolding has enabled unbinned and high-dimensional differential cross section measurements. Two main approaches have emerged in this research area: one based on discriminative models and one based on generative models. The main advantage of discriminative models is that they learn a small correction to a starting simulation, while generative models scale better to regions of phase space with little data. We propose to use Schrödinger bridges and diffusion models to create SBUnfold, an unfolding approach that combines the strengths of both discriminative and generative models. The key feature of SBUnfold is that its generative model maps one set of events into another without having to go through a known probability density, as is the case for normalizing flows and standard diffusion models. We show that SBUnfold achieves excellent performance compared to state-of-the-art methods on a synthetic Z+jets dataset.
    Comment: 9 pages, 5 figures
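    The distinguishing feature described above is that the bridge is pinned at two data distributions rather than at a Gaussian prior. A minimal NumPy sketch of that idea (toy data and names are illustrative assumptions, not SBUnfold's actual implementation) is the Brownian-bridge interpolant between paired detector-level and particle-level events:

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge_sample(x0, x1, t, sigma=0.1, rng=rng):
    """Sample x_t on a Brownian bridge pinned at x0 (t=0) and x1 (t=1).

    This is the interpolant underlying Schrodinger-bridge-style diffusion:
    the process connects two data distributions directly, so neither
    endpoint needs to be a known density such as a Gaussian.
    """
    mean = (1.0 - t) * x0 + t * x1
    std = sigma * np.sqrt(t * (1.0 - t))
    return mean + std * rng.normal(size=np.shape(x0))

# toy "events": hypothetical smeared (detector-level) and true observables
x_det = rng.normal(0.0, 1.2, size=(5, 3))
x_part = rng.normal(0.0, 1.0, size=(5, 3))

# the bridge is pinned: t=0 recovers the detector events, t=1 the true events
assert np.allclose(brownian_bridge_sample(x_det, x_part, 0.0), x_det)
assert np.allclose(brownian_bridge_sample(x_det, x_part, 1.0), x_part)
```

    A network trained on such intermediate samples can then transport detector-level events toward particle level without ever passing through pure noise.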

    Identity-Guided Collaborative Learning for Cloth-Changing Person Reidentification

    Cloth-changing person reidentification (ReID) is a newly emerging research topic aimed at addressing the large feature variations caused by clothing changes and pedestrian view/pose changes. Although significant progress has been achieved by introducing extra information (e.g., human contour sketches, human body keypoints, and 3D human information), cloth-changing person ReID remains challenging because pedestrian representations are easily perturbed. Moreover, human semantic information and pedestrian identity information are not fully explored. To address these issues, we propose a novel identity-guided collaborative learning scheme (IGCL) for cloth-changing person ReID, in which human semantics are fully utilized and the unchanging identity guides collaborative learning. First, we design a novel clothing attention degradation stream to reasonably reduce the interference caused by clothing information, employing clothing attention and mid-level collaborative learning. Second, we propose a human semantic attention and body jigsaw stream to highlight human semantic information and simulate different poses of the same identity. In this way, the extracted features not only focus on human semantic information that is unrelated to the background but also suit pedestrian pose variations. Moreover, a pedestrian identity enhancement stream is further proposed to strengthen the importance of identity and extract more robust identity features. Most importantly, all these streams are jointly explored in an end-to-end unified framework, and the identity is utilized to guide the optimization. Extensive experiments on five public cloth-changing person ReID datasets demonstrate that the proposed IGCL significantly outperforms SOTA methods and that the extracted features are more robust, discriminative, and clothing-irrelevant.
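    The body jigsaw stream mentioned above can be pictured as shuffling body regions of the same identity so that features stop relying on a fixed spatial layout. The sketch below is a toy version of that augmentation; the stripe count and permutation scheme are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def body_jigsaw(image, n_stripes=4, rng=None):
    """Shuffle horizontal stripes of a pedestrian image of shape (H, W, C).

    Permuting body parts of the same identity simulates pose variation and
    forces downstream features to depend on identity cues rather than on
    where each body part happens to sit in the frame.
    """
    rng = rng or np.random.default_rng()
    h = image.shape[0]
    edges = np.linspace(0, h, n_stripes + 1, dtype=int)
    stripes = [image[a:b] for a, b in zip(edges[:-1], edges[1:])]
    order = rng.permutation(n_stripes)
    return np.concatenate([stripes[i] for i in order], axis=0)

# tiny synthetic "image" so the effect is easy to inspect
img = np.arange(8 * 4 * 3).reshape(8, 4, 3)
aug = body_jigsaw(img, n_stripes=4, rng=np.random.default_rng(1))
assert aug.shape == img.shape                      # layout preserved
assert sorted(aug.ravel()) == sorted(img.ravel())  # same pixels, new order
```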

    Knowledge-Aware Prompt Tuning for Generalizable Vision-Language Models

    Pre-trained vision-language models, e.g., CLIP, working with manually designed prompts have demonstrated great capacity for transfer learning. Recently, learnable prompts have achieved state-of-the-art performance but are prone to overfitting to seen classes and fail to generalize to unseen ones. In this paper, we propose a Knowledge-Aware Prompt Tuning (KAPT) framework for vision-language models. Our approach takes inspiration from human intelligence, in which external knowledge is usually incorporated when recognizing novel categories of objects. Specifically, we design two complementary types of knowledge-aware prompts for the text encoder to leverage the distinctive characteristics of category-related external knowledge. The discrete prompt extracts key information from descriptions of an object category, and the learned continuous prompt captures overall context. We further design an adaptation head for the visual encoder to aggregate salient attentive visual cues, which establishes discriminative and task-aware visual representations. We conduct extensive experiments on 11 widely used benchmark datasets, and the results verify the effectiveness of our method in few-shot image classification, especially in generalizing to unseen categories. Compared with the state-of-the-art CoCoOp method, KAPT exhibits favorable performance and achieves an absolute gain of 3.22% on new classes and 2.57% in terms of harmonic mean.
    Comment: Accepted by ICCV 202
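    The harmonic mean reported above is the standard base-to-new generalization summary used in the prompt-tuning literature: it penalizes methods that trade unseen-class accuracy for seen-class accuracy. A minimal sketch (the input numbers below are illustrative, not results from the paper):

```python
def harmonic_mean(base_acc, new_acc):
    """Harmonic mean of seen-class (base) and unseen-class (new) accuracy,
    both in percent. Low accuracy on either split drags the score down,
    unlike the arithmetic mean."""
    return 2 * base_acc * new_acc / (base_acc + new_acc)

# illustrative numbers only
print(round(harmonic_mean(80.0, 70.0), 2))  # -> 74.67
```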

    Contrast-augmented Diffusion Model with Fine-grained Sequence Alignment for Markup-to-Image Generation

    The recently rising task of markup-to-image generation poses greater challenges than natural image generation, due to its low tolerance for errors and the complex sequence and context correlations between markup and the rendered image. This paper proposes a novel model named "Contrast-augmented Diffusion Model with Fine-grained Sequence Alignment" (FSA-CDM), which introduces contrastive positive/negative samples into the diffusion model to boost performance for markup-to-image generation. Technically, we design a fine-grained cross-modal alignment module to thoroughly explore the sequence similarity between the two modalities and learn robust feature representations. To improve generalization, we propose a contrast-augmented diffusion model that explicitly explores positive and negative samples by maximizing a novel contrastive variational objective, which is mathematically shown to provide a tighter bound for the model's optimization. Moreover, a context-aware cross-attention module is developed to capture the contextual information within the markup language during the denoising process, yielding better noise prediction. Extensive experiments are conducted on four benchmark datasets from different domains, and the results demonstrate the effectiveness of the proposed components in FSA-CDM, significantly exceeding state-of-the-art performance with about 2%-12% DTW improvements. The code will be released at https://github.com/zgj77/FSACDM.
    Comment: Accepted to ACM MM 2023.
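    The DTW improvements cited above refer to dynamic time warping, which scores how well two sequences align under local stretching. The following is a generic textbook DTW distance, not the paper's exact evaluation code:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between two
    1-D sequences, with absolute difference as the local cost. Each cell
    D[i, j] holds the cheapest alignment cost of a[:i] against b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of: insertion, deletion, or match step
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

assert dtw_distance([1, 2, 3], [1, 2, 3]) == 0.0
assert dtw_distance([1, 2, 3], [1, 2, 2, 3]) == 0.0  # warping absorbs repeats
```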

    Greenhouse gas emissions from municipal wastewater treatment facilities in China from 2006 to 2019

    Wastewater treatment plants (WWTPs) alleviate water pollution but also induce resource consumption and environmental impacts, especially greenhouse gas (GHG) emissions. Mitigating the GHG emissions of WWTPs can contribute to achieving carbon neutrality in China, yet high-resolution, time-series GHG emission inventories of Chinese WWTPs are still lacking. In this study, we construct a firm-level emission inventory of WWTPs covering CH4, N2O and CO2 emissions from different wastewater treatment processes, energy consumption and effluent discharge for the period from 2006 to 2019. We aim to develop a transparent, verifiable and comparable WWTP GHG emission inventory to support GHG mitigation of WWTPs in China.
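    Multi-gas inventories like the one described above are typically aggregated into CO2-equivalents with global warming potential (GWP) factors. The sketch below uses IPCC AR5 GWP100 factors (CH4 = 28, N2O = 265) as an assumption; the paper's exact characterization factors and plant data are not reproduced here:

```python
# GWP100 factors: tonnes CO2-equivalent per tonne of gas (IPCC AR5 values,
# assumed here for illustration).
GWP100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

def co2_equivalent(emissions_t):
    """emissions_t: dict mapping gas name -> tonnes emitted per year.
    Returns total emissions in tonnes CO2-equivalent."""
    return sum(GWP100[gas] * tonnes for gas, tonnes in emissions_t.items())

# hypothetical single-plant inventory entry (tonnes per year)
plant = {"CO2": 1200.0, "CH4": 15.0, "N2O": 2.0}
print(co2_equivalent(plant))  # -> 2150.0
```

    Summing such entries over all plants and years yields the kind of firm-level, time-series inventory the abstract describes.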

    Reassortant between Human-Like H3N2 and Avian H5 Subtype Influenza A Viruses in Pigs: A Potential Public Health Risk

    Human-like H3N2 influenza viruses have repeatedly been transmitted to domestic pigs in different regions of the world, but it is still uncertain whether any of these variants could become established in pig populations. The fact that different subtypes of influenza viruses have been detected in pigs makes them an ideal candidate for the genesis of a possible reassortant virus with both human and avian origins. However, whether pigs can act as a “mixing vessel” for a possible future pandemic virus remains an open question. This prompted us to gather epidemiological information and investigate the genetic evolution of swine influenza viruses in Jilin, China. Nasopharyngeal swabs were collected from pigs with respiratory illness in Jilin province, China, from July 2007 to October 2008. All samples were screened for influenza A viruses, and three H3N2 swine influenza virus isolates were analyzed genetically and phylogenetically. Influenza surveillance of pigs in Jilin province revealed that H3N2 influenza viruses were regularly detected in domestic pigs from 2007 to 2008. Phylogenetic analysis revealed that two distinguishable groups of H3N2 influenza viruses were present in pigs: wholly contemporary human-like H3N2 viruses (represented by the Moscow/10/99-like sublineage) and double-reassortant viruses containing genes from contemporary human H3N2 viruses and avian H5 viruses, both co-circulating in pig populations. The present study reports for the first time the coexistence of wholly human-like H3N2 viruses and double-reassortant viruses that have emerged in pigs in Jilin, China. It provides updated information on the role of pigs in interspecies transmission and genetic reassortment of influenza viruses.